One foot planted in “Yeehaw!” the other in “yuppie”.

  • 1 Post
  • 5 Comments
Joined 2 years ago
Cake day: June 11th, 2023


  • I think this take is starting to be a bit outdated. Numerous films have used Blender. The “biggest” recent one is RRR - https://www.blender.org/user-stories/visual-effects-for-the-indian-blockbuster-rrr/

    The Man in the High Castle is another notable “professional” example - https://www.blender.org/user-stories/visual-effects-for-the-man-in-the-high-castle/

    It’s been slow, but Blender is starting to break into the larger industry, with bigger productions tending to come from non-U.S. producers.

    There is something to be said about the tooling exclusivity in U.S. studios and backroom deals. But ultimately money talks: Autodesk only has so much money to secure those rights, and studios only have so much money to spend on licensing.

    I’ve been following Blender since 2008 - what we have now was unimaginable back then. Real commercial viability has been reached (as a tool). What stands in the way now is a combination of entrenched interests and money. Intel shows how that’s a tenuous market position at best, and actively self-destructive at worst.

    Ultimately I think your claim that it’s not used by real studios is patently and provably false. But I will concede that it’s still an uphill battle and moneyed interests are almost impossible to defeat. They typically need to defeat themselves first, sort of like Intel did.



  • I understand the sentiment… But… This is a terribly reasoned and researched article. We only need to look at NASA to see how flawed it is.

    Blown capacitors and resistors, solder failing over time and through various conditions, failing RAM/ROM/NAND chips. Just because the technology has fewer “moving parts” doesn’t mean it’s any less susceptible to environmental and age-based degradation. And we only get around those challenges through necessity and really smart engineers.

    The article uses the example of a 2014 Model S - but I don’t think it’s fair to conflate 2 million kilometers in the span of 10 years with the same distance over the quoted 74 years. It’s just not the same. Time brings seasonal changes, which happen whether you drive the vehicle or not. Further, in many cases, car computers never completely turn off, meaning they are running 24/7/365. Not to mention that Teslas in general have poor reliability, as tracked by multiple third parties.

    Perhaps if there were an easy-access panel that allowed replacement of 90% of the car’s electronics through standardized cards, that would go a long way toward realizing a “Buy it for Life” vehicle. Assuming we can just build 80-year, “all-condition” capacitors, resistors, and other components isn’t realistic or scalable.

    What’s weird is that they seem to concede the repairability aspect at the end, without any thought whatsoever as to how that impacts reliability.

    In conclusion: a poor article with a surface-level view of reliability, using bad examples (one person’s Tesla) to prop up the narrative that EVs - as they exist - could last forever if companies wanted.



  • On a technical level, your own user count matters less than the user and comment counts of the instances you subscribe to. Too many subscriptions can overwhelm smaller instances and saturate your network in terms of packets per second and your ISP’s routing capacity - not to mention your router. Additionally, most ISPs block traffic going to your house on port 80, so you’d likely need to put the instance behind a Cloudflare tunnel for anything resembling reliability. Your ISP may be different, and it’s always worth asking what restrictions they place on self-hosted services (non-business use cases specifically). Otherwise, going with your ISP’s business plan is likely a must. Outside of that, yes, you’ll need a beefy router or switch (or multiple) to handle the constant packets coming into your network.
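    To make the Cloudflare tunnel point concrete, here’s a minimal sketch of how that setup usually looks with the `cloudflared` CLI. The tunnel name and hostname (`my-instance.example.com`) are placeholders, and this assumes `cloudflared` is installed and the domain is managed in a Cloudflare account:

    ```shell
    # Authenticate against your Cloudflare account (opens a browser).
    cloudflared tunnel login

    # Create a named tunnel; this writes a credentials JSON file locally.
    cloudflared tunnel create my-tunnel

    # Point a DNS record at the tunnel (hostname is a placeholder).
    cloudflared tunnel route dns my-tunnel my-instance.example.com

    # ~/.cloudflared/config.yml (illustrative):
    #   tunnel: my-tunnel
    #   credentials-file: /home/user/.cloudflared/<tunnel-id>.json
    #   ingress:
    #     - hostname: my-instance.example.com
    #       service: http://localhost:80   # your instance's local web server
    #     - service: http_status:404       # catch-all for unmatched requests

    # Run the tunnel; outbound-only, so no inbound port 80/443 needed at home.
    cloudflared tunnel run my-tunnel
    ```

    Since the connection is established outbound from your network, this sidesteps residential port blocking entirely.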

    Then there’s the security aspect. What happens if your site is breached in a way that gives an attacker remote code execution? Did you make sure to isolate this network from the rest of your devices? If not, you’re in for a world of hurt.
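    The isolation piece can be as simple as a few firewall rules. A hypothetical nftables sketch, assuming the server sits on its own segment (interface `dmz0`, subnet 192.168.50.0/24) and the trusted LAN is 192.168.1.0/24 - all names and addresses here are illustrative:

    ```shell
    # Create a table and a forward-filtering chain.
    nft add table inet fw
    nft add chain inet fw forward '{ type filter hook forward priority 0; policy accept; }'

    # Allow replies to connections the LAN initiated toward the DMZ.
    nft add rule inet fw forward ct state established,related accept

    # Block anything the DMZ host tries to initiate toward the trusted LAN,
    # so a compromised server can't pivot to your other devices.
    nft add rule inet fw forward iifname "dmz0" ip daddr 192.168.1.0/24 drop
    ```

    Rule order matters: the `established,related` accept comes first so LAN-initiated sessions still work, while new connections from the DMZ side are dropped.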

    These are all issues that are mitigated and easier to navigate on a VPS or cloud provider.

    As for the non-technical issues:

    There’s also the problem of moderation. What I mean is that, as a server owner, you WILL end up needing to quarantine, report, and submit illegal images to the authorities - even if you whitelist only the most respectable instances. It might not happen soon, but it’s only a matter of time before your instance happens to be subscribed to a popular external community when it gets hit with a nasty attack, leaving you to deal with a stressful cleanup.

    When you run this on a homelab on consumer hardware, it’s easier for certain government entities to claim that you were not performing your due diligence and may even be complicit in the content’s proliferation. Proving such a thing is always the crux, of course, but in my view I’d rather have my site running on infrastructure that looks as official as possible. The closer it resembles what an actual business would do, the better I think I’d fare under a more targeted attack - from a legal/compliance standpoint.