I feel like every day I come across 15-20 “AI-powered tools” that “analyze” something, and none of them clearly state how they use data. This one seems harmless enough: put a profile in and it will scrape everything about them — all their personal information, their location, every post they ever made… Nothing can possibly go wrong aggregating all that personal info, right? No idea where this data is sent, where it’s stored, or who it’s sold to. Kinda alarming.
A toy like that is easy to create and not that expensive to offer. It costs much more to run than some JavaScript or CSS, but in the end it’s not that different.
I think people don’t really understand this whole scraping thing. For example, you can torrent all of Reddit up until the API change: all the comments, profiles, and usernames, including now-deleted stuff. There was a lot of outrage here over Reddit cracking down on those third-party tools. It’s hard to square that outrage over cracking down on third-party tools with the outrage here over not cracking down on third-party tools.
Anyway, if someone wants to archive all of Bluesky, they don’t need to offer some AI toy. They can just download the content via the API.
You can still torrent the Reddit Pushshift dumps, even after the API change. But yeah, I definitely agree otherwise; these are just cheap toys that less experienced developers create for their portfolios.
A toy like that is easy to create and not that expensive to offer.
Right, and the developers of Bsky didn’t think to maybe block something that scrapes all that personal information?
Like Lemmy or Mastodon, Bluesky was built around the idea of federation. While Bluesky is not fully there yet, federated services are inherently very easy to scrape.
Maybe it’s time for people to understand that anything they post/vote/comment/like should be considered public domain.
If that’s what you want, you should join Facebook.
The fundamental thing to understand is that the internet - and really all information processing - is about copying. There is no such thing as “looking” at a profile or a post. The text and image data is downloaded to your device. You end up with multiple copies on your device.
Sending information out, but blocking people from storing it, is fundamentally a contradiction in terms.
Bsky - like Lemmy - made the choice to make the data widely available. It is available via API and does not need to be scraped. The alternative is to do it like Reddit or even Facebook or Discord. But they can’t stop scraping, either. They can make it slower and more laborious but not stop it. Services like Facebook protect the data as best as they can to “protect your privacy”. In reality, it’s about making it hard for you to leave the platform or anyone else to benefit from your data. Either way, you can trust Zuck to protect your data as if it was his own. Because it is.
That would always by definition block all third parties.
Think of the Reddit example from the person you replied to: there was a huge outcry when Reddit announced it was shutting down its lower API tiers.
Either information is free to flow, or it isn’t; there is no middle ground.
With that in mind: I’m sure they thought about it and decided to prioritize transparency and flexibility over security. Personally, I support that decision.
I know how APIs on Reddit work, but you can block people who misuse the API if they’re doing something nefarious. Some of these AI tools are, in my honest opinion, very taxing on hardware. Having to retrieve millions of posts, comments, pictures, and text on demand, and then send all of that to who-knows-where for AI scraping… sounds very costly.
It’s an open federated system, just like Lemmy. Your posts belong to everyone.
The only money to be made in the LLM craze is data scraping, collection, filtering, collation and data set selling. When in a gold rush, don’t dig, sell shovels. And AI needs a shit ton of shovels.
The only people making money are Nvidia, the third-party data center operators, and the data brokers. Everyone else running and using the models is losing money; even OpenAI, the biggest AI vendor, is running at a loss. Eventually the bubble will burst, and the data brokers will still have something to sell. In the meantime, the fastest way to increase model performance is to increase model size, which means more data is needed to train them.
Hey hey, there’s a flourishing market for NSFW AI chatbots that I’m sure is raking in the cash by essentially re-selling access credits at a higher price.
AI has tons of money.
AI companies either do this scraping, or they buy data from others who have done such scraping.
Since the AI companies are sitting on full treasure chests (venture capital), there is a vibrant market at the moment.
An LLM is, at its core, just a text processing tool. For it to be remotely useful when you’re not generating text from nothing, you need data to process — preferably in large amounts, so you appear more useful. Scraping websites like this is a good way to get source data that looks useful to the individual you’re trying to convince to give you money.
So you’re saying LLM is a pyramid scheme?!? Pardon me while i clutch my pearls.
AI “tools” like this are an absolute piece of piss to create, and they are also exactly the kind of thing that bro investors love to throw money at right now.
this is just combining existing data-scraping tools with LLMs to create a pretty flimsy and superfluous product. they use the data to do what they say. if they wanted to scrape data on you, they could already do that. all they get from this is your interest and maybe some other PII, like your email address. the LLM is just incidental here. it’s honestly not even as bad, privacy-wise, as a “hot or not” or personality quiz.
AI is just trending. NFTs, and crypto before that, had their moment, and they really were everywhere — shoehorned into places where they didn’t even make sense. And I think sane investors realized that and pulled out.